Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument

Author

  • Sebastian Benthall

Abstract

In recent years prominent intellectuals have raised ethical concerns about the consequences of artificial intelligence. One concern is that an autonomous agent might modify itself to become "superintelligent" and, in supremely effective pursuit of poorly specified goals, destroy all of humanity. This paper considers and rejects the possibility of this outcome. We argue that this scenario depends on an agent's ability to rapidly improve its ability to predict its environment through self-modification. Using a Bayesian model of a reasoning agent, we show that there are important limitations to how an agent may improve its predictive ability through self-modification alone. We conclude that concern about this artificial intelligence outcome is misplaced and better directed at policy questions around data access and storage.

The appetite of the public and prominent intellectuals for the study of the ethical implications of artificial intelligence has increased in recent years. One captivating possibility is that artificial intelligence research might result in a 'superintelligence' that puts humanity at risk. (Russell 2014) has called for AI researchers to consider this possibility seriously because, however unlikely, its mere possibility is grave. (Bostrom 2014) argues for the importance of considering the risks of artificial intelligence as a research agenda. For Bostrom, the potential risks of artificial intelligence are not just at the scale of industrial mishaps or weapons of mass destruction. Rather, Bostrom argues that artificial intelligence has the potential to threaten humanity as a whole and determine the fate of the universe. We approach this grand thesis with a measure of skepticism. Nevertheless, we hope that by elucidating the argument and considering potential objections in good faith, we can get a better grip on the realistic ethical implications of artificial intelligence. This paper is in that spirit.
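The abstract's central claim about Bayesian prediction can be illustrated with a minimal, hypothetical sketch (not code from the paper; all names here are our own): a Beta-Bernoulli agent whose posterior, and hence whose predictive accuracy, is fixed by the data it has observed. Recomputing the inference with more resources, an analogue of self-modification without new observations, leaves its predictions unchanged.

```python
# Illustrative sketch, not from the paper: a Bayesian agent predicting a
# Bernoulli process. The point: with a fixed dataset, re-running inference
# (an analogue of self-improvement without new data) yields the same
# posterior and the same predictions.

from fractions import Fraction

def posterior(alpha, beta, data):
    """Beta-Bernoulli conjugate update: add counts of 1s and 0s to the prior."""
    successes = sum(data)
    return alpha + successes, beta + len(data) - successes

def predictive_prob(alpha, beta):
    """Posterior predictive probability that the next observation is 1."""
    return Fraction(alpha, alpha + beta)

data = [1, 1, 0, 1]
a1, b1 = posterior(1, 1, data)   # one pass over the data
a2, b2 = posterior(1, 1, data)   # a "self-improved" agent, same data

# Identical data, identical posterior, identical prediction (2/3).
assert predictive_prob(a1, b1) == predictive_prob(a2, b2) == Fraction(2, 3)
```

Only new observations shift the posterior here, which is the intuition behind redirecting concern toward data access and storage.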
We consider the argument for this AI doomsday scenario proposed by Bostrom (Bostrom 2014). Section 1 summarizes Bostrom's argument and motivates the work of the rest of the paper. It focuses on the conditions of an "intelligence explosion" that would lead to a dominant machine intelligence averse to humanity. Section 2 argues that rather than speculating broadly about general artificial intelligence, we can predict outcomes of artificial intelligence by considering more narrowly a few tasks that are essential to instrumental reasoning. Section 3 considers recalcitrance, the resistance of a system to improvements to its own intelligence, and the ways it can limit an intelligence explosion. Section 4 contains an analysis of the recalcitrance of prediction, using a Bayesian model of a predictive agent. We conclude that prediction is not something an agent can easily improve upon autonomously. Section 5 discusses the implications of these findings for further investigation into AI risk.

1 Bostrom's core argument and definitions

Bostrom makes a number of claims in the course of his argument which we outline here as distinct propositions.

Proposition 1. A system with sufficient intelligence relative to other intelligent systems will have a 'decisive strategic advantage' and will determine the fate of the world and universe.
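The "intelligence explosion" dynamic that the discussion of recalcitrance targets is commonly summarized by Bostrom's informal rate equation, which we render as:

\[
\frac{dI}{dt} = \frac{O(I)}{R(I)}
\]

where \(I\) is the system's intelligence, \(O(I)\) the optimization power applied to improving it, and \(R(I)\) its recalcitrance. An explosion requires \(R(I)\) to remain low, or fall, as \(I\) grows; the argument of the later sections is that for predictive ability, recalcitrance does not behave this way under self-modification alone.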




Journal:
  • CoRR

Volume abs/1702.08495

Publication date: 2017